Supplementary Material: Demixed shared component analysis of neural population data from multiple brain areas
We generated sequences of neuronal population activity in areas X and Y for each combination of task parameters (stimulus and decision). For each combination, we generated 20 trials, resulting in 300 trials in total. Neurons in areas X and Y were affected by the stimulus and decision, and communicated with each other as follows. Neurons in area X passed stimulus-related information to neurons in area Y via a random projection matrix, two time steps after area X began its stimulus-related computation. Once area Y received the stimulus-related input from area X, its neurons began to compute the decision.
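This scheme lends itself to a compact simulation; below is a minimal sketch in NumPy. The population sizes, time-step count, noise level, and the 5-stimulus x 3-decision split are illustrative assumptions: the text fixes only the 20 trials per combination (300 in total, i.e. 15 combinations) and the two-step lag.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; only the trial counts and the 2-step lag come from the text.
n_x, n_y, n_t = 50, 50, 40               # neurons per area, time steps
stimuli, decisions = range(5), range(3)  # one plausible 5 x 3 = 15 split
onset, lag = 5, 2                        # X starts stimulus coding at t = onset

stim_dir = rng.normal(size=n_x)                 # fixed stimulus tuning in area X
dec_dir = rng.normal(size=n_y)                  # fixed decision tuning in area Y
W = rng.normal(size=(n_y, n_x)) / np.sqrt(n_x)  # random projection X -> Y

trials = []
for s in stimuli:
    for d in decisions:
        for _ in range(20):
            X = rng.normal(scale=0.1, size=(n_t, n_x))   # background noise
            Y = rng.normal(scale=0.1, size=(n_t, n_y))
            X[onset:] += s * stim_dir                    # stimulus coding in X
            Y[onset + lag:] += X[onset:n_t - lag] @ W.T  # relayed to Y after the lag
            Y[onset + lag:] += d * dec_dir               # Y then computes the decision
            trials.append((s, d, X, Y))

assert len(trials) == 300
```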
Demixed shared component analysis of neural population data from multiple brain areas
Recent advances in neuroscience data acquisition allow for the simultaneous recording of large populations of neurons across multiple brain areas while subjects perform complex cognitive tasks. Interpreting these data requires us to identify how task-relevant information is shared across brain regions, but this is often confounded by the mixing of different task parameters at the single-neuron level. Here, inspired by a method developed for a single brain area, we introduce a new technique for demixing variables across multiple brain areas, called demixed shared component analysis (dSCA).
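The abstract names the method without detail, so the following is only a plausible sketch of the idea, not the paper's estimator: average (marginalize) trials by one task parameter at a time, then fit a reduced-rank regression from area X to area Y, so that each low-rank map captures shared variance attributable to that parameter. The function names, the data layout, and the rank are assumptions.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained least squares mapping X -> Y (both: samples x features)."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)       # unconstrained OLS fit
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T                                     # top output directions
    return B_ols @ P @ P.T                              # OLS fit projected to rank r

def dsca_like(X, Y, labels, rank=2):
    """X, Y: (trials, time, neurons) arrays from two areas.
    labels: dict mapping a task-parameter name to a per-trial label array.
    For each parameter, average trials sharing its value (marginalizing the
    other parameters, time kept), then fit a low-rank map from X to Y."""
    comps = {}
    for name, lab in labels.items():
        vals = np.unique(lab)
        Xm = np.concatenate([X[lab == v].mean(0) for v in vals])  # (vals*time, n_x)
        Ym = np.concatenate([Y[lab == v].mean(0) for v in vals])
        comps[name] = reduced_rank_regression(Xm, Ym, rank)
    return comps
```

With the simulated trials above, labels = {"stimulus": s_arr, "decision": d_arr} (per-trial label arrays) would yield one low-rank shared map per task parameter.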
Obstacle Avoidance using Dynamic Movement Primitives and Reinforcement Learning
Urbaniak, Dominik, Agostini, Alejandro, Ramon, Pol, Rosell, Jan, Suárez, Raúl, Suppa, Michael
Abstract--Learning-based motion planning can quickly generate near-optimal trajectories, but it often requires either large training datasets or costly collection of human demonstrations. This work proposes an alternative approach that quickly generates smooth, near-optimal, collision-free 3D Cartesian trajectories from a single artificial demonstration. The demonstration is encoded as a Dynamic Movement Primitive (DMP) and iteratively reshaped using policy-based reinforcement learning to create a diverse trajectory dataset for varying obstacle configurations. This dataset is used to train a neural network that takes as input the task parameters describing the obstacle dimensions and location, derived automatically from a point cloud, and outputs the DMP parameters that generate the trajectory. The approach is validated in simulation and real-robot experiments, outperforming an RRT-Connect baseline in computation time, execution time, and trajectory length, while supporting multi-modal trajectory generation for different obstacle geometries and end-effector dimensions. Videos and the implementation code are available at https://github.com/

A motion planner for autonomous robotic manipulation should be able to quickly generate smooth, optimal trajectories in different scenarios [1]. Sampling-based motion planners often struggle to quickly find near-optimal trajectories due to frequent online resampling [2], [3].
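For readers unfamiliar with DMPs, the sketch below shows the standard discrete DMP that such a pipeline reshapes; the learned network described in the abstract would map the task parameters to the forcing-term weights w. The gains, basis count, and Euler integration here are conventional defaults, not values from the paper.

```python
import numpy as np

def dmp_rollout(y0, g, w, tau=1.0, dt=0.01, alpha=25.0, beta=6.25, alpha_x=1.0):
    """Integrate a 1-D discrete Dynamic Movement Primitive.
    y0: start position, g: goal, w: forcing-term weights (one per basis)."""
    n_basis = len(w)
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centers along the phase
    h = 1.0 / np.gradient(c) ** 2                      # widths from center spacing
    y, z, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c) ** 2)                      # Gaussian basis activations
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)   # nonlinear forcing term
        z += dt / tau * (alpha * (beta * (g - y) - z) + f)   # transformation system
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                       # canonical system decay
        traj.append(y)
    return np.array(traj)
```

With w = np.zeros(10) the rollout converges smoothly from y0 to g; nonzero weights bend the path, which is what the reinforcement-learning step exploits to steer around obstacles. A 3D Cartesian trajectory uses one such DMP per axis, all driven by the same canonical phase x.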